n034  1051  28 Dec 83
BC-ARTIFIC-INTEL 2takes
(Newhouse 003)
For weekend use
(Note to editors: Please display the following above byline)
''We are now at the dawn of a new computer revolution ... the
transition from information processing to knowledge processing, from
computers that calculate and store data to computers that reason and
inform.'' - ''The Fifth Generation,'' by Edward A. Feigenbaum and
Pamela McCorduck
By PATRICK YOUNG
Newhouse News Service
    WASHINGTON - When scientists tried, years ago, to program a computer
to translate ''The spirit is willing, but the flesh is weak'' into
Russian and then back into English, the machine came up with: ''The
vodka is strong, but the meat is rotten.''
    The field of artificial intelligence has come a long way since that
botched translation.
    Today, artificial intelligence programs called ''expert systems''
are searching for minerals, helping to diagnose diseases and even
doing some simple language translations.
    But the ultimate goal of artificial intelligence - a machine that
can match the human brain - remains on the far side of enormous
obstacles.
    ''At the current time, anything you could think of as a barrier -
is,'' says Jerome A. Feldman of the University of Rochester, N.Y. ''A
robot that walks down the street safely? We're not anywhere near
that.''
    Scientists want to create artificial intelligence programs that
emulate human thought. But building such a thinking machine requires
giving it the powers of learning, reasoning, memory, common sense,
judgment and inference. And these, in turn, require some machine
version of sight,
hearing, speech and natural language understanding - even the senses
of smell and touch.
    ''There are a lot of things that go into intelligence,'' says Scott
Fahlman of Carnegie-Mellon University in Pittsburgh. ''We're making
progress in these different abilities at different rates.''
    From workrooms to war rooms, machines with human-level intelligence
could prove enormously important in planning and decision-making. But
right now, nobody knows how to give a computer the common sense of a
2-year-old or teach it to reason by analogy or understand subtle
nuances of language.
    How, for example, is a computer to comprehend the true meaning of
the classic threat in ''The Godfather'' - ''I'll make him an offer he
can't refuse''?
    Though perfection is far from imminent, scientists are making
progress.
    Private companies envision a lucrative market. International
Resource Development, a consulting firm in Norwalk, Conn., predicts
artificial intelligence systems will be nearly a $9 billion industry
by 1993.
    Japan plans to spend more than $1 billion over a decade to develop a
new generation of intelligent computers, and the U.S. Defense
Department alone wants to spend at least half that amount in the next
five years to beat the Japanese effort.
    Social thinkers fret about the psychological and economic impact of
artificial intelligence, which the private, non-profit National
Research Council says ''would surely create a new economics, a new
sociology and a new history.'' How might people react to taking
orders from a machine brighter than they are?
    Some forecasters see tens of millions of manufacturing jobs lost to
supersmart robots. But the jobs threat isn't confined to the factory.
    ''I think the middle-management level is the one that is most
vulnerable,'' says Kent K. Curtis, acting director of computer
research at the federal government's National Science Foundation.
    Today's ''expert systems'' indicate why.
    Scientists create these systems by first interviewing one or more
experts in a specific field to learn in great detail how they reach
conclusions.
    Then a program is written that includes thousands of step-by-step
''if-then'' rules - if one thing is true, then the computer proceeds
a certain way. The computer uses these rules, which contain the
experts' knowledge and reasoning patterns, to make decisions.
    ''These systems are very good at being real specialists where you
don't need a lot of breadth,'' Fahlman says.
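The ''if-then'' machinery described above can be sketched in a few lines of code. The sketch below is purely illustrative: the rules, symptoms and the `diagnose` function are invented for this example and are not drawn from DELTA or any real expert system, but they show how a program can chain simple rules together to reach a conclusion the way the article describes.

```python
def diagnose(facts, rules):
    """Repeatedly fire any rule whose conditions all hold, adding its
    conclusion to the known facts (so-called forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Toy repair rules in the spirit of a locomotive-maintenance system;
# each pairs a set of ''if'' conditions with a ''then'' conclusion.
rules = [
    ({"engine will not start", "fuel gauge reads empty"}, "check fuel supply"),
    ({"engine will not start", "fuel gauge reads full"}, "check ignition"),
    ({"check ignition", "no spark observed"}, "replace ignition module"),
]

known = {"engine will not start", "fuel gauge reads full", "no spark observed"}
print(diagnose(known, rules))
```

Chaining is what lets a few dozen narrow rules cover many situations: one rule's conclusion (''check ignition'') becomes another rule's condition, mimicking the step-by-step reasoning an experienced repairman would follow.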
    DELTA, developed by General Electric Co., is a system that helps
locomotive maintenance personnel to diagnose and repair problems more
efficiently. Given the symptoms of a problem, it asks a series of
questions - to be answered ''yes'' or ''no'' - and orders various
checks and tests, much as an experienced repairman would.
    Dipmeter Adviser helps petroleum geologists at Schlumberger Ltd.,
the giant oil-service company, to analyze rock corings from
exploratory wells. PUFF is a system for helping to diagnose lung
diseases. And Prospector is used to search for valuable ore deposits.
    Impressive as they are, these systems are seen as primitive
forerunners of true artificial intelligence.
    ''They can only work in a specialized domain of knowledge that has
already been digested for them; they have no ability to break out
from that domain,'' Curtis says. ''They have no perceptual ability,
no cognitive ability, and no way of extending their knowledge.''
    Many experts consider common sense the most difficult goal in
building a bona fide thinking machine. This everyday reasoning comes
naturally to even young children, but remains beyond the capability
of computers.
    Common sense involves a mental outlook that philosopher Daniel C.
Dennett describes as ''all things being equal.'' It is a reasoning
born, in part, of experience.
    Common sense tells us that, ''all things being equal,'' we won't
find a bear behind our closet door. So we open it without thinking
about the possibility, or about any other danger common sense tells
us not to expect.
    ''We understand the strategy, but how we are able to do it is quite
mysterious,'' says Dennett, of Tufts University in Medford, Mass.
    ''We learn how to behave by use of our appreciation of what is
normal. We exclude things that are not normal - but we don't exclude
them by thinking about them and then excluding them. To get a
computer to have expectations based on an understanding of what is
normal is a very tricky problem that has not been cracked.''
    Current artificial intelligence programs can do literal translations
of English sentences reasonably well. But they cannot understand the
nuances that are needed to translate a full text and accurately
convey its intended meaning.
    Neither can computers understand and act on any but the simplest
commands.
    Computers that understand the breadth and depth of natural language
would be of enormous help. But understanding even a simple sentence
requires a knowledge of the world and an ability to interpret meaning
from context and even tone of voice.
    ''The exact same statement will mean different things to different
people because of what is expected of them,'' says Elaine Rich of the
University of Texas at Austin.
    ''Suppose your wife says to you, 'The toilet doesn't work.' She
means she wants you to fix it. If she says that to your 5-year-old
child, she means, 'Don't use it.' Obviously, she doesn't expect a
5-year-old to fix a toilet.''
    Vision is equally difficult.
    Humans can quickly grasp the important details of very complex
scenes. Machine vision - using a computer to interpret the images
picked up by a television camera - is primitive at best.
    Under controlled conditions, computers can identify specific parts
of an image and tell if they are assembled properly. They can read
printed numbers and letters, though not handwritten script.
    The kind of vision a robot would need to safely roam the streets or
countryside continues to elude scientists.
    Rochester's Feldman offers an example of the many vision problems
that remain unsolved - judging distance.
    Humans do this quite easily, but getting a computer to do it has
proved extremely difficult. Feldman says the brain actually makes use
of about 20 clues in determining distance.
    The eyes are positioned differently when we look afar than when we
examine something close. Subtle differences occur in colors and the
texture of materials with distance. Look down a railroad track,
Feldman says, and you will see the ties - although equally spaced -
appear closer and closer together in the distance.
    Breaking such barriers as common sense, language and vision will
require far deeper understanding of how the human brain works.
    But a thinking, reasoning system also will require major changes in
how computers operate. Currently, computers work sequentially,
carrying out their operations one step at a time.
    But the brain performs many functions at once by what is called
parallel processing. Considerable effort is going into building
parallel processing into computers so that they, too, can do many
things simultaneously.
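The contrast between sequential and parallel operation can be sketched with modern library code. The `recognize` function here is a made-up stand-in for an expensive perception step, not any real vision routine; the point is only that the same work can be organized one-region-at-a-time or spread across several workers at once.

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(region):
    # Hypothetical stand-in for an expensive perception step
    # (e.g. analyzing one patch of an image); here it just sums values.
    return sum(region)

image_regions = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Sequential: one region after another, as conventional computers work.
sequential = [recognize(r) for r in image_regions]

# Parallel: the regions handed to a pool of workers at once,
# in the spirit of the brain's parallel processing.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(recognize, image_regions))

assert sequential == parallel  # same answers, different organization
```

The answers come out identical either way; what parallelism changes is how the work is organized, which is why it demands new computer designs rather than just faster versions of the old ones.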
    ''The problem here is not so much storing all this information, but
getting at the right piece at the right time,'' Fahlman says.
JM END YOUNG
(DISTRIBUTED BY THE INDEPENDENT PRESS SERVICE)
    
nyt-12-28-83 1350est